
    Efficient Next-Hop Selection in Multi-Hop Routing for IoT Enabled Wireless Sensor Networks

    The Internet of Things (IoT) paradigm allows the integration of the cyber and physical worlds with other emerging technologies. IoT-enabled wireless sensor networks (WSNs) are rapidly gaining interest due to their ability to aggregate sensed data and transmit it towards central or intermediate repositories, such as computational clouds and fogs. This paper presents an efficient multi-hop routing protocol (EMRP) for data dissemination in IoT-enabled WSNs that employs hierarchy-based, energy-efficient routing. EMRP uses a rank-based next-hop selection mechanism: for each device, it considers the residual energy when choosing the route for data exchange. We extracted the residual energy at each node and evaluated it against the connection degree to determine the maximum rank. This allowed us to identify the time slots for measuring the lifetime of the network. We also used the battery expiry time of the first node to determine the network expiry time. We validated our work through extensive simulations using Network Simulator, implementing TCL scripts and C code to configure low-power sensing devices, cluster heads, and sink nodes, and extracting results from the trace files with AWK scripts. The results demonstrate that the proposed EMRP outperforms existing related schemes in terms of average lifetime, packet delivery ratio, time slots, communication loss, communication area, first-node expiry, number of alive nodes, and residual energy.
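Rank-based next-hop selection of this kind can be illustrated with a short sketch. The abstract does not give EMRP's exact ranking formula, so the product of residual energy and connection degree below, along with the `Node` fields and function names, is an illustrative assumption:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    residual_energy: float            # joules remaining in the battery
    neighbors: list = field(default_factory=list)

def rank(node: "Node") -> float:
    # assumed rank: residual energy weighted by connection degree
    return node.residual_energy * len(node.neighbors)

def select_next_hop(current: "Node"):
    # forward to the neighbor with the maximum rank, if any neighbor exists
    return max(current.neighbors, key=rank, default=None)
```

A node with high remaining energy but no connectivity would rank low under this heuristic, which matches the abstract's idea of validating energy against the connection degree.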

    Reliability-Aware Cooperative Routing with Adaptive Amplification for Underwater Acoustic Wireless Sensor Networks

    The protocols in underwater acoustic wireless sensor networks (UAWSNs) that address reliability in packet forwarding usually consider the connectivity of the routing paths only up to one or two hops. Since sensor nodes are connected with one another through other nodes in their neighborhood, such protocols have compromised reliability: they do not guarantee the presence of neighbors beyond the selected one or two hops for connectivity and path establishment. This is further worsened by the harshness and unpredictability of the underwater environment. In addition, establishing the routing paths usually requires the nodes' undersea geographical locations, which is infeasible because water currents cause the nodes to move from one position to another. To overcome these challenges, this paper presents two routing schemes for UAWSNs: reliability-aware routing (RAR) and reliability-aware cooperative routing with adaptive amplification (RACAA). RAR considers complete path connectivity to advance packets to the sea surface, which overcomes packet loss when connectivity is not established and forwarder nodes are not available for data routing. For all established paths, the probability of successfully transmitting data packets is calculated, which avoids adverse channel effects. However, the sea channel is unpredictable and fluctuating, and its properties may change between this computation and the actual transmission. Therefore, cooperative routing with adaptive power control of relays is added to RAR, yielding the RACAA protocol. In RACAA, a relay node increases its transmit power above the normal level, before forwarding a packet to the destination, whenever the error in the data it receives from the sender exceeds 50%. This further increases reliability when such packets are forwarded.
Unlike the conventional approach, the proposed protocols establish routes without knowing the geographical locations of the nodes, which are computationally challenging to track due to node movement with ocean currents and tides. Simulation results show that RAR and RACAA outperform the counterpart scheme in delivering packets to the water surface.
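The adaptive-amplification rule above can be sketched as follows. In practice a relay would estimate the error rate from channel coding rather than by comparing against a known reference packet, and the 1.5x boost factor is a placeholder; only the 50% threshold comes from the description above:

```python
def relay_transmit_power(received_bits, reference_bits,
                         base_power_w, boost_factor=1.5):
    # fraction of bit positions that differ from the reference packet
    errors = sum(r != o for r, o in zip(received_bits, reference_bits))
    error_rate = errors / len(reference_bits)
    # amplify above the normal level when more than half the bits are corrupted
    if error_rate > 0.5:
        return base_power_w * boost_factor
    return base_power_w
```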

    Smarter Traffic Prediction Using Big Data, In-Memory Computing, Deep Learning and GPUs

    Road transportation is the backbone of modern economies, yet it annually costs 1.25 million lives and trillions of dollars to the global economy, and damages public health and the environment. Deep learning is among the leading-edge methods used for transportation-related predictions; however, the existing works are in their infancy and fall short in multiple respects, including the use of datasets with limited sizes and scopes and insufficient depth of the deep learning studies. This paper provides a novel and comprehensive approach toward large-scale, faster, and real-time traffic prediction by bringing together four complementary cutting-edge technologies: big data, deep learning, in-memory computing, and Graphics Processing Units (GPUs). We trained deep networks using over 11 years of data provided by the California Department of Transportation (Caltrans), the largest dataset used in deep learning studies to date. Several combinations of the input attributes of the data, along with various network configurations of the deep learning models, were investigated for training and prediction purposes. The use of the pre-trained model for real-time prediction was also explored. The paper contributes novel deep learning models, algorithms, implementations, an analytics methodology, and a software tool for smart cities, big data, high-performance computing, and their convergence.
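A standard first step when feeding such time-series traffic data to a predictive model is to frame it as supervised learning over lagged windows. A minimal sketch of that windowing follows; the function name and lag count are illustrative, not taken from the paper:

```python
def make_supervised(series, n_lags):
    # Turn a univariate traffic-flow series into (features, target) pairs:
    # each sample uses the previous n_lags readings to predict the next one.
    X, y = [], []
    for i in range(n_lags, len(series)):
        X.append(series[i - n_lags:i])
        y.append(series[i])
    return X, y
```

Varying `n_lags` is one simple way to realize the "several combinations of the input attributes" that the abstract mentions investigating.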

    Rapid Transit Systems: Smarter Urban Planning Using Big Data, In-Memory Computing, Deep Learning, and GPUs

    Rapid transit systems, or metros, are a popular choice for high-capacity public transport in urban areas due to several advantages, including safety, dependability, speed, cost, and a lower risk of accidents. Existing studies on metros have not considered appropriate holistic urban transport models or the integrated use of cutting-edge technologies. This paper proposes a comprehensive approach toward large-scale and faster prediction of metro-system characteristics by integrating four leading-edge technologies: big data, deep learning, in-memory computing, and Graphics Processing Units (GPUs). Using the London Metro as a case study and the real-world Rolling Origin and Destination Survey (RODS) dataset, we predict the number of passengers for six time intervals (a) using various access transport modes to reach the train stations (buses, walking, etc.); (b) using various egress modes to travel from the metro station to their next points of interest (PoIs); (c) traveling between different origin-destination (OD) pairs of stations; and (d) against the distance between the OD stations. The prediction allows better spatiotemporal planning of the whole urban transport system, including the metro subsystem and its various access and egress modes. The paper contributes novel deep learning models, algorithms, implementations, an analytics methodology, and a software tool for the analysis of metro systems.

    Large Field-Size Throughput/Area Accelerator for Elliptic-Curve Point Multiplication on FPGA

    This article presents a throughput/area accelerator for elliptic-curve point multiplication over GF(2^571). To optimize throughput, we propose an efficient hardware accelerator architecture with a fully recursive Karatsuba multiplier that performs polynomial multiplications in one clock cycle. To minimize hardware resources, we reuse the proposed Karatsuba multiplier for the modular square implementations. Moreover, the Itoh-Tsujii algorithm for modular inverse computation is executed using the multiplier resources. These strategies allow us to reduce the hardware resources of the implemented accelerator over the large field size of 571 bits. A dedicated controller provides the control functionalities. The accelerator is implemented in Verilog HDL using the Vivado IDE. Post-place-and-route results are reported for Xilinx Virtex-6 and Virtex-7 devices, which utilize 6107 and 5683 slices, respectively. On the same devices, the accelerator operates at maximum frequencies of 319 MHz and 361 MHz, with latencies of 28.73 μs and 25.38 μs. A comparison with the state of the art shows that the proposed architecture achieves superior throughput/area values. Thus, the accelerator architecture is suitable for cryptographic applications that demand high throughput and low area simultaneously.
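The recursive Karatsuba decomposition used by the multiplier can be sketched in software over GF(2), where polynomials are packed into integer bit patterns and addition is XOR. This models the arithmetic only; the single-cycle hardware datapath is not reproduced, and the 32-bit software base case and function names are assumptions for the sketch:

```python
def gf2_mul_school(a, b):
    # schoolbook carry-less multiplication of GF(2) polynomials
    # packed into ints (bit i of a is the coefficient of x**i)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a, b, w=571):
    # recursive Karatsuba over GF(2)[x]; w is the operand bit-width
    if w <= 32:                       # assumed base-case width
        return gf2_mul_school(a, b)
    h = w // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h         # split a = a1*x**h + a0
    b0, b1 = b & mask, b >> h
    lo = karatsuba_gf2(a0, b0, h)
    hi = karatsuba_gf2(a1, b1, w - h)
    # middle term: (a0+a1)(b0+b1) - lo - hi, with +/- both XOR in GF(2)
    mid = karatsuba_gf2(a0 ^ a1, b0 ^ b1, w - h) ^ lo ^ hi
    return lo ^ (mid << h) ^ (hi << (2 * h))
```

Three recursive multiplications replace the four of the schoolbook split, which is the source of the area savings the hardware design exploits.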

    Machine Learning-Enabled Internet of Things (IoT): Data, Applications, and Industry Perspective

    Machine learning (ML) allows the Internet of Things (IoT) to gain hidden insights from the treasure trove of sensed data and be truly ubiquitous, without explicitly looking for knowledge and data patterns. Without ML, IoT cannot withstand the future requirements of businesses, governments, and individual users. The primary goal of IoT is to perceive what is happening in our surroundings and allow automation of decision-making through intelligent methods that mimic the decisions made by humans. In this paper, we classify and discuss the literature on ML-enabled IoT from three perspectives: data, application, and industry. Through a review of around 300 published sources, we elaborate on dozens of cutting-edge methods and applications in which ML and IoT work together to make our environments smarter. We also discuss emerging IoT trends, including the Internet of Behavior (IoB), pandemic management, connected autonomous vehicles, edge and fog computing, and lightweight deep learning. Further, we classify the challenges to IoT into four classes: technological, individual, business, and societal. This paper will help in exploiting IoT opportunities and addressing its challenges to make our societies more prosperous and sustainable.

    Cooperative and Delay Minimization Routing Schemes for Dense Underwater Wireless Sensor Networks

    Symmetry in node operation in underwater wireless sensor networks (WSNs) is crucial so that nodes consume their energy in a balanced fashion. This prevents the rapid death of nodes close to the water surface and enhances the network life span. Symmetry can be achieved by minimizing delay and ensuring reliable packet delivery to the sea surface. In dense underwater networks in particular, packet reliability is a serious concern when a large number of nodes advance packets: the packets collide and are lost, which wastes energy and introduces extra delay as the lost packets are usually retransmitted. This is further worsened as the network size grows and packets take longer routes, which increases their collision probability. To cope with these issues, two routing schemes are designed for dense underwater WSNs in this paper: delay minimization routing (DMR) and cooperative delay minimization routing (CoDMR). In the DMR scheme, the entire network is divided into four equal regions, with one minor sink node placed at the center of each region. Unlike the conventional approach, the placement of the minor sink nodes involves a timer-based operation and is independent of geographical knowledge of each minor sink's position. Nodes whose physical distance from a sink is lower than the communication range broadcast packets directly to the minor sink nodes; otherwise, multi-hopping is used. Placing the minor sinks in the four regions avoids packet delivery to the water surface through long-distance multi-hopping, which minimizes delay and balances energy utilization. However, DMR is vulnerable in terms of information reliability due to its single-path routing. The CoDMR scheme is therefore designed to add reliability to DMR using cooperative routing.
In CoDMR, a node whose physical distance from the sink is greater than its communication range sends the information packets in cooperation with a single relay node. The destination and relay nodes are chosen by considering the lowest physical distance with respect to the desired minor sink node. The packets received at the destination node are merged using fixed-ratio combining as a diversity technique. The physical distance computation is independent of the geographical knowledge of nodes, unlike in geographical routing protocols, which makes the proposed schemes computationally efficient. Simulations show that the DMR and CoDMR algorithms outperform the counterpart algorithms in terms of total energy cost, energy balancing, packet delivery ratio (PDR), latency, energy left in the battery, and nodes depleted of battery power.
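The four-region sink placement and the direct-versus-multi-hop decision can be sketched geometrically. The square deployment area and the use of Euclidean coordinates below are illustrative assumptions; the actual protocol infers distances through a timer-based operation rather than from node coordinates:

```python
import math

def minor_sink_positions(side):
    # one minor sink at the centre of each of the four equal square regions
    q = side / 4.0
    return [(q, q), (3 * q, q), (q, 3 * q), (3 * q, 3 * q)]

def forwarding_mode(node_pos, sinks, comm_range):
    # pick the closest minor sink; transmit directly if it is within the
    # communication range, otherwise fall back to multi-hop forwarding
    sink = min(sinks, key=lambda s: math.dist(node_pos, s))
    mode = "direct" if math.dist(node_pos, sink) <= comm_range else "multi-hop"
    return mode, sink
```

Because every node is at most one region away from a minor sink, the longest multi-hop chains of a single-sink layout are avoided, which is the delay and energy-balancing argument made above.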